We present extensions to a continuous-state dependency parsing method that make it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments on morphologically rich languages show that the parsing model benefits from incorporating character-based encodings of words.
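The core idea of the character-based representation can be sketched as follows: instead of looking up a per-word embedding, the word vector is composed from its character sequence by running one recurrent network left-to-right and another right-to-left and concatenating their final states. The sketch below is illustrative only, assuming a toy alphabet and a simplified Elman-style recurrent cell standing in for the paper's LSTMs; all names (`SimpleRNNCell`, `encode_word`) and the untrained random weights are hypothetical.

```python
import math
import random

random.seed(0)

class SimpleRNNCell:
    """Minimal Elman-style recurrent cell (a simplified stand-in for an LSTM)."""
    def __init__(self, input_size, hidden_size):
        self.hidden_size = hidden_size
        # Small random weights; a real model would learn these by backpropagation.
        self.w_ih = [[random.uniform(-0.1, 0.1) for _ in range(input_size)]
                     for _ in range(hidden_size)]
        self.w_hh = [[random.uniform(-0.1, 0.1) for _ in range(hidden_size)]
                     for _ in range(hidden_size)]

    def step(self, x, h):
        # h_new[j] = tanh(W_ih[j] . x + W_hh[j] . h)
        return [math.tanh(sum(wi * xi for wi, xi in zip(self.w_ih[j], x)) +
                          sum(wh * hh for wh, hh in zip(self.w_hh[j], h)))
                for j in range(self.hidden_size)]

def one_hot(index, size):
    v = [0.0] * size
    v[index] = 1.0
    return v

def encode_word(word, alphabet, fwd, bwd, hidden_size):
    """Compose a word vector from its characters: run one recurrent network
    left-to-right and another right-to-left, then concatenate final states."""
    chars = [one_hot(alphabet.index(c), len(alphabet)) for c in word]
    h_f = [0.0] * hidden_size
    for x in chars:                  # forward pass over characters
        h_f = fwd.step(x, h_f)
    h_b = [0.0] * hidden_size
    for x in reversed(chars):        # backward pass over characters
        h_b = bwd.step(x, h_b)
    return h_f + h_b                 # concatenation, dimension 2 * hidden_size

alphabet = list("abcdefghijklmnopqrstuvwxyz")
H = 8
fwd = SimpleRNNCell(len(alphabet), H)
bwd = SimpleRNNCell(len(alphabet), H)

# Morphologically related forms share characters, so after training their
# composed vectors can share statistical strength; a lookup table cannot.
v1 = encode_word("parse", alphabet, fwd, bwd, H)
v2 = encode_word("parser", alphabet, fwd, bwd, H)
print(len(v1))
```

Because every word vector is built from shared character-level parameters, an unseen inflected form still receives a meaningful representation, which is the property the abstract credits for the gains on morphologically rich languages.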